Meet 'Chameleon' – an AI model that can protect you from facial recognition thanks to a sophisticated digital mask
A new AI model can mask personal images without destroying their quality, helping to protect your privacy.
Artificial intelligence (AI) could hold the key to hiding your personal photos from unwanted facial recognition software and fraudsters, all without destroying the image quality.
A new study from Georgia Tech, published July 19 to the preprint arXiv database, details how researchers created an AI model called "Chameleon," which can produce a digital "single, personalized privacy protection (P-3) mask" for personal photos that prevents unwanted facial scanners from detecting a person’s face. Instead, Chameleon causes facial recognition systems to identify the photos as belonging to someone else.
"Privacy-preserving data sharing and analytics like Chameleon will help to advance governance and responsible adoption of AI technology and stimulate responsible science and innovation," said lead author of the study Ling Liu, professor of data and intelligence-powered computing at Georgia Tech’s School of Computer Science (who developed the Chameleon model alongside other researchers), in a statement.
Facial recognition systems are now commonplace in everyday life, from police cameras to Face ID on iPhones. But unwanted or unauthorized scanning can let cybercriminals collect images for scams, fraud or stalking, and even build databases for targeted advertising and cyberattacks.
Making masks
While the masking of images is nothing new, existing systems often obfuscate key details of a person’s photo or introduce digital artifacts that ruin its quality. To overcome this, the researchers said Chameleon has three specific features.
The first is the use of cross-image optimization, which enables Chameleon to create one P3-Mask per user rather than a new mask for each image. This means the AI system can deliver instant protection for a user and makes more efficient use of limited computing resources; the latter would likely be handy if Chameleon were adopted for use in devices like smartphones.
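To make the cross-image idea concrete, here is a minimal PyTorch sketch of how a single mask might be optimized across a user’s whole photo collection. The `FaceEmbedder` network, the loss and every parameter below are illustrative stand-ins, not the model or formulation from the Georgia Tech paper:

```python
import torch
import torch.nn as nn

class FaceEmbedder(nn.Module):
    """Stand-in for a pretrained face recognition embedding network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 128),
        )

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=1)

def train_p3_mask(user_photos, embedder, steps=200, eps=8 / 255):
    """Optimize ONE mask across all of a user's photos (cross-image)."""
    mask = torch.zeros(1, 3, 112, 112, requires_grad=True)
    opt = torch.optim.Adam([mask], lr=1e-2)
    with torch.no_grad():
        identity = embedder(user_photos).mean(0, keepdim=True)
    for _ in range(steps):
        protected = (user_photos + mask).clamp(0, 1)
        # Push the protected photos' embeddings AWAY from the
        # user's true identity embedding.
        loss = torch.cosine_similarity(embedder(protected), identity).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            mask.clamp_(-eps, eps)  # keep the perturbation subtle
    return mask.detach()

photos = torch.rand(16, 3, 112, 112)  # one user's photo collection
p3_mask = train_p3_mask(photos, FaceEmbedder())
```

Because the same mask is reused, protecting a new photo afterward amounts to a single addition and clamp, which is what would make on-device use plausible.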
The second is "perceptibility optimization," which automatically determines how a protected image is rendered, with no manual intervention or parameter setting, to ensure the visual quality of the protected facial image is preserved.
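The study’s exact formulation isn’t given here, but one common way to automate this kind of quality constraint is to let the weight on a distortion penalty adapt on its own. The sketch below is a hypothetical illustration; `protected_image_loss` and `quality_floor` are invented names, not Chameleon’s actual method:

```python
import torch

def protected_image_loss(protected, original, attack_loss, quality_floor=0.01):
    """Attack objective plus a distortion penalty whose weight adapts
    automatically instead of being hand-tuned."""
    distortion = torch.mean((protected - original) ** 2)  # visual damage
    # Strengthen the quality penalty whenever distortion exceeds the
    # floor, so no manual parameter setting is needed.
    weight = (distortion / quality_floor).clamp(min=1.0).detach()
    return attack_loss + weight * distortion

original = torch.rand(1, 3, 112, 112)
protected = (original + 0.03 * torch.randn_like(original)).clamp(0, 1)
print(protected_image_loss(protected, original, attack_loss=torch.tensor(0.5)))
```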
The third feature is the strengthening of a P3-Mask so that it’s robust enough to foil facial recognition models it has never encountered. This is done by integrating focal diversity-optimized ensemble learning into the mask-generation process. In other words, the mask is optimized against the combined predictions of multiple facial recognition models, making it more likely to transfer to unknown ones.
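As a rough sketch of the ensemble idea, the code below averages the attack objective over several recognition models and greedily selects a "diverse" subset based on how much model pairs disagree on a probe batch. The disagreement measure is a simple stand-in; the paper’s focal-diversity metric is more involved:

```python
import itertools
import torch
import torch.nn as nn

def ensemble_attack_loss(protected, identity_embs, models):
    """Average the attack objective over every model in the ensemble,
    so the mask is not tailored to any single recognizer."""
    losses = [torch.cosine_similarity(m(protected), e).mean()
              for m, e in zip(models, identity_embs)]
    return torch.stack(losses).mean()

def pick_diverse_subset(models, probe, k=3):
    """Greedy stand-in for focal-diversity selection: start from the
    pair of models that disagree most on a probe batch, then top up."""
    def disagreement(a, b):
        return float(1 - torch.cosine_similarity(a(probe), b(probe)).mean())
    chosen = list(max(itertools.combinations(models, 2),
                      key=lambda pair: disagreement(*pair)))
    for m in models:
        if len(chosen) >= k:
            break
        if m not in chosen:
            chosen.append(m)
    return chosen

probe = torch.rand(4, 3, 32, 32)
# Stand-in "recognizers": random linear projections of the pixels.
models = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
          for _ in range(5)]
ensemble = pick_diverse_subset(models, probe, k=3)
```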
Ultimately, the researchers would like to apply Chameleon’s obfuscation methods beyond the protection of individual users’ personal images.
"We would like to use these techniques to protect images from being used to train artificial intelligence generative models. We could protect the image information from being used without consent,” said Georgia Tech doctoral student Tiansheng Huang, who was also involved in the development of Chameleon.